…o matches (#5760)

* fix(agentflow): prevent ConditionAgent from silently dropping when no scenario matches

  The ConditionAgent was doing strict exact string matching against scenario descriptions, but LLMs often return abbreviated or slightly different versions of the scenario text. When nothing matched, all branches got marked as unfulfilled and the flow silently terminated with no response.

  Added fallback matching (startsWith, includes) so partial matches still route correctly, plus a last-resort else branch so the flow never just dies silently. Also added a safety net in the execution engine to catch the case where all conditions are unfulfilled.

  Fixes #5620

* refactor: normalize output once and drop unnecessary any casts

  - Normalize calledOutputName once before all matching steps instead of calling toLowerCase().trim() repeatedly
  - Remove explicit any types where inference handles them

* test(agentflow): cover ConditionAgent scenario matching fallbacks

* Update matchScenario.test.ts

* Update matchScenario.ts

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
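The fallback chain described in the commit message (exact match, then startsWith, then includes, then a last-resort else branch) can be sketched roughly as follows. This is an illustrative sketch, not the actual Flowise `matchScenario.ts`; the `Scenario` shape and function name are assumptions.

```typescript
interface Scenario {
    description: string
}

// Returns the index of the matched scenario; never fails silently.
function matchScenario(scenarios: Scenario[], calledOutputName: string): number {
    // Normalize once up front instead of calling toLowerCase().trim()
    // repeatedly in each matching step.
    const normalized = calledOutputName.toLowerCase().trim()
    const descs = scenarios.map((s) => s.description.toLowerCase().trim())

    // 1. Exact match against the scenario description.
    let idx = descs.findIndex((d) => d === normalized)
    if (idx !== -1) return idx

    // 2. Prefix match: the LLM returned an abbreviated version of the scenario.
    idx = descs.findIndex((d) => d.startsWith(normalized))
    if (idx !== -1) return idx

    // 3. Substring match in either direction.
    idx = descs.findIndex((d) => d.includes(normalized) || normalized.includes(d))
    if (idx !== -1) return idx

    // 4. Last-resort else branch so the flow never just dies with every
    //    condition marked unfulfilled.
    return scenarios.length - 1
}
```

Note the trade-off: substring matching can route loosely worded LLM output to the wrong branch, which is presumably why the exact and prefix checks run first.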
…Smith, and other providers (fixes #5763) (#5764)

* fix(analytics): capture token usage and model for Langfuse, LangSmith, and other providers

  What changed
  ------------
  - handler.ts: Extended onLLMEnd() to accept string | structured output. When structured output is passed, we now extract content, usageMetadata (input/output/total tokens), and responseMetadata (model name) and forward them to all analytics providers. Added usage/model to Langfuse generation.end(), LangSmith llm_output, and token attributes for Lunary, LangWatch, Arize, Phoenix, and Opik. Call langfuse.flushAsync() after generation.end() so updates are sent before the request completes.
  - LLM.ts: Pass the full output object from prepareOutputObject() to onLLMEnd instead of the finalResponse string, so usage and model are available.
  - Agent.ts: Same as LLM.ts: pass the output object to onLLMEnd.
  - ConditionAgent.ts: Build analyticsOutput with content, usageMetadata, and responseMetadata from the LLM response and pass it to onLLMEnd.
  - handler.test.ts: Added unit tests for the extraction logic (string vs. object, token field normalization, model name sources, missing fields).

  OpenAIAssistant.ts call sites are unchanged (Assistants API; no usage data).

  Why
  ---
  Fixes #5763. Analytics providers (Langfuse, LangSmith, etc.) were only receiving plain text from onLLMEnd; usage_metadata and response_metadata from AIMessage were dropped, so token counts and model names were missing in dashboards and cost tracking.

  Testing
  -------
  - pnpm build succeeds with no TypeScript errors.
  - Manual: Flowise started, Agentflow with ChatOpenAI run; LangSmith and Langfuse both show token usage and model on the LLM generation.
  - Backward compatible: call sites that pass a string (e.g. OpenAIAssistant) still work; onLLMEnd treats a string as content-only.
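The backward-compatible string | object handling described for onLLMEnd() might look like the sketch below. Field names (usageMetadata, responseMetadata) follow the commit message; the type shapes and the helper name are assumptions for illustration, not the actual handler.ts code.

```typescript
interface LLMEndOutput {
    content?: string
    usageMetadata?: { input_tokens?: number; output_tokens?: number; total_tokens?: number }
    responseMetadata?: { model_name?: string; model?: string }
}

interface ExtractedLLMEnd {
    content: string
    usage?: { input_tokens?: number; output_tokens?: number; total_tokens?: number }
    model?: string
}

// Normalize what call sites hand to onLLMEnd into one shape the
// analytics providers can consume.
function extractLLMEnd(output: string | LLMEndOutput): ExtractedLLMEnd {
    // Backward compatible: a plain string (e.g. from OpenAIAssistant) is
    // treated as content-only, with no usage or model information.
    if (typeof output === 'string') return { content: output }

    return {
        content: output.content ?? '',
        usage: output.usageMetadata,
        // The model name can live under more than one response metadata key
        // depending on the provider, so try the common ones in order.
        model: output.responseMetadata?.model_name ?? output.responseMetadata?.model
    }
}
```

Each analytics provider (Langfuse generation.end(), LangSmith llm_output, etc.) would then read from the one extracted shape instead of re-parsing the raw output.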
  Co-authored-by: Cursor <cursoragent@cursor.com>

* refactor(analytics): address PR review feedback for token usage handling

  - LangSmith: Only include token_usage properties that have defined values, to avoid passing undefined to the API
  - Extract common OpenTelemetry span logic into an _endOtelSpan helper method used by the Arize, Phoenix, and Opik providers

  Co-authored-by: Cursor <cursoragent@cursor.com>

* fix(analytics): LangSmith cost tracking and flow name in traces

  - LangSmith: set usage_metadata and ls_model_name/ls_provider on run extra.metadata so LangSmith can compute costs from token counts (compatible with langsmith 0.1.6, which has no end(metadata) param). Infer ls_provider from the model name.
  - buildAgentflow: use chatflow.name as the analytics trace/run name instead of the hardcoded 'Agentflow', so LangSmith and Langfuse show the Flowise flow name.

  Co-authored-by: Cursor <cursoragent@cursor.com>

* update handlers to include model and provider for analytics

* fix: normalize provider names in analytics handler to include 'amazon_bedrock'

---------

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Henry <hzj94@hotmail.com>
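The review fix of only sending defined token_usage values to LangSmith amounts to building the usage object field by field rather than spreading possibly-undefined properties. A minimal sketch, assuming the OpenAI-style prompt/completion/total key names; the helper name and exact keys are illustrative, not Flowise code:

```typescript
interface UsageMetadata {
    input_tokens?: number
    output_tokens?: number
    total_tokens?: number
}

// Map LangChain-style usage metadata to a token_usage object that
// contains only the fields that actually have values, so undefined
// is never passed through to the LangSmith API.
function buildTokenUsage(usage: UsageMetadata): Record<string, number> {
    const tokenUsage: Record<string, number> = {}
    if (usage.input_tokens !== undefined) tokenUsage.prompt_tokens = usage.input_tokens
    if (usage.output_tokens !== undefined) tokenUsage.completion_tokens = usage.output_tokens
    if (usage.total_tokens !== undefined) tokenUsage.total_tokens = usage.total_tokens
    return tokenUsage
}
```

The same pattern (explicit presence checks instead of object spread) keeps partial usage data from some providers from turning into `"prompt_tokens": undefined` in the serialized payload.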